disparity map
- Oceania > Australia (0.05)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.05)
- Oceania > Australia (0.04)
- Asia > China (0.04)
- North America > Canada (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
An Image-Based Path Planning Algorithm Using a UAV Equipped with Stereo Vision
Iz, Selim Ahmet, Unel, Mustafa
This paper presents a novel image-based path planning algorithm developed using computer vision techniques, together with a comparative analysis against well-known deterministic and probabilistic algorithms, namely A* and the Probabilistic Road Map (PRM) algorithm. Terrain depth has a significant impact on the safety of the computed path, since craters and hills on the surface cannot be distinguished in a two-dimensional image. The proposed method uses a disparity map of the terrain generated from images captured by a UAV. Several computer vision techniques, including edge, line and corner detection as well as stereo depth reconstruction, are applied to the captured images, and the resulting disparity map is used to define candidate waypoints of the trajectory. The initial and desired points are detected automatically using ArUco marker pose estimation and circle detection. After presenting the mathematical model and vision techniques, the developed algorithm is compared with the well-known algorithms on different virtual scenes created in the V-REP simulation program and on a physical setup built in a laboratory environment. Results are promising and demonstrate the effectiveness of the proposed algorithm.
- Asia > Middle East > Republic of Türkiye (0.04)
- Asia > China (0.04)
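As an illustrative sketch only (not the paper's implementation): planning over a disparity map can be reduced to thresholding disparity into an obstacle grid and running a grid search such as A*, one of the baselines the paper compares against. The synthetic map, the safety threshold, and the start/goal cells below are all made up for the example.

```python
import heapq
import numpy as np

def astar(grid, start, goal):
    """4-connected A* over a boolean obstacle grid (True = blocked),
    with Manhattan distance as the admissible heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]
    came, cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came:          # already expanded with a better cost
            continue
        came[cur] = parent
        if cur == goal:          # walk parents back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                    and not grid[nxt] and g + 1 < cost.get(nxt, float("inf"))):
                cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None

# Synthetic disparity map: high disparity = close/tall terrain = obstacle.
disp = np.zeros((10, 10))
disp[3:7, 4] = 50.0            # a "hill" blocking the direct route
obstacles = disp > 30.0        # assumed safety threshold for this sketch
path = astar(obstacles, (5, 0), (5, 9))
```

The returned path detours around the high-disparity cells; in the paper the candidate waypoints come from edge/line/corner detection on the disparity map rather than from a raw grid.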
Enhancing the Quality of 3D Lunar Maps Using JAXA's Kaguya Imagery
Iwashita, Yumi, Moe, Haakon, Cheng, Yang, Ansar, Adnan, Georgakis, Georgios, Stoica, Adrian, Nakashima, Kazuto, Kurazume, Ryo, Torresen, Jim
Abstract -- As global efforts to explore the Moon intensify, the need for high-quality 3D lunar maps becomes increasingly critical, particularly for long-distance missions such as NASA's Endurance mission concept, in which a rover aims to traverse 2,000 km across the South Pole-Aitken basin. Kaguya TC (Terrain Camera) images, though globally available at 10 m/pixel, suffer from altitude inaccuracies caused by stereo matching errors and JPEG-based compression artifacts. This paper presents a method to improve the quality of 3D maps generated from Kaguya TC images, focusing on mitigating the effects of compression-induced noise in disparity maps. We analyze the compression behavior of Kaguya TC imagery and identify systematic disparity noise patterns, especially in darker regions. We then propose an approach to enhance 3D map quality by reducing residual noise in disparity images derived from the compressed images. Experimental results show that the proposed approach effectively reduces elevation noise, enhancing the safety and reliability of terrain data for future lunar missions.
- Europe > Norway > Eastern Norway > Oslo (0.04)
- North America > United States > California > Los Angeles County > Pasadena (0.04)
- Asia > Japan > Kyūshū & Okinawa > Kyūshū > Fukuoka Prefecture > Fukuoka (0.04)
- Asia > China (0.04)
- Government > Space Agency (1.00)
- Government > Regional Government > North America Government > United States Government (0.50)
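A minimal sketch of the kind of correction the paper targets, under loud assumptions: the paper's actual method is not specified here, so the example simply median-filters disparity values wherever the source image is dark, which suppresses isolated compression-noise spikes. The dark threshold, window size, and function name are invented for illustration.

```python
import numpy as np

def denoise_dark_disparity(disparity, image, dark_thresh=40, k=3):
    """Replace disparity values in dark image regions with a local k*k
    median; borders are handled by edge padding. Purely illustrative."""
    pad = k // 2
    padded = np.pad(disparity, pad, mode="edge")
    # Stack the k*k shifted views and take the per-pixel median.
    windows = np.stack([
        padded[r:r + disparity.shape[0], c:c + disparity.shape[1]]
        for r in range(k) for c in range(k)
    ])
    med = np.median(windows, axis=0)
    out = disparity.copy()
    dark = image < dark_thresh     # noise is reported to concentrate here
    out[dark] = med[dark]
    return out
```

Restricting the filter to dark pixels preserves genuine terrain detail in well-lit regions while removing the impulse-like disparity noise that would otherwise appear as spurious elevation spikes.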
Stereovision Image Processing for Planetary Navigation Maps with Semi-Global Matching and Superpixel Segmentation
Lu, Yan-Shan, Arana-Catania, Miguel, Upadhyay, Saurabh, Felicetti, Leonard
Mars exploration requires precise and reliable terrain models to ensure safe rover navigation across its unpredictable and often hazardous landscapes. Stereoscopic vision serves a critical role in the rover's perception, allowing scene reconstruction by generating precise depth maps through stereo matching. State-of-the-art Martian planetary exploration uses traditional local block matching, which aggregates cost over square windows and refines disparities via smoothness constraints. However, this method often struggles with low-texture images, occlusion, and repetitive patterns because it considers only limited neighbouring pixels and lacks a wider understanding of scene context. This paper uses Semi-Global Matching (SGM) with superpixel-based refinement to mitigate the inherent block artefacts and recover lost details. The approach balances the efficiency and accuracy of SGM and adds context-aware segmentation to support more coherent depth inference. The proposed method has been evaluated on three datasets with successful results. On a Mars analogue, the resulting terrain maps show improved structural consistency, particularly in sloped or occlusion-prone regions; large gaps behind rocks, which are common in raw disparity outputs, are reduced, and surface details like small rocks and edges are captured more accurately. Two further datasets, evaluated to test the method's general robustness and adaptability, yield more precise disparity maps and more consistent terrain models, better suited to the demands of autonomous navigation on Mars, with competitive accuracy across both non-occluded and full-image error metrics. This paper outlines the entire terrain modelling process, from finding corresponding features to generating the final 2D navigation maps, offering a complete pipeline suitable for integration in future planetary exploration missions.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.28)
- Asia > Singapore (0.04)
- North America > Canada (0.04)
- Asia > China (0.04)
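The core of SGM is a dynamic-programming pass that aggregates a matching-cost volume along scanline paths with two smoothness penalties, P1 for one-pixel disparity changes and P2 for larger jumps. The sketch below implements the standard recursion along a single left-to-right path in numpy; it is a textbook illustration of the technique, not the pipeline from this paper (which also adds superpixel refinement and multiple path directions).

```python
import numpy as np

def sgm_aggregate_lr(cost, P1=1.0, P2=8.0):
    """Aggregate a matching-cost volume cost[row, col, d] along one SGM
    path (left to right) with the standard P1/P2 smoothness penalties:
    L(p,d) = C(p,d) + min(L(p-1,d), L(p-1,d+-1)+P1, min_k L(p-1,k)+P2)
             - min_k L(p-1,k)."""
    H, W, D = cost.shape
    agg = np.empty_like(cost)
    agg[:, 0] = cost[:, 0]
    for x in range(1, W):
        prev = agg[:, x - 1]                       # (H, D)
        shift_m = np.full_like(prev, np.inf); shift_m[:, 1:] = prev[:, :-1]
        shift_p = np.full_like(prev, np.inf); shift_p[:, :-1] = prev[:, 1:]
        prev_min = prev.min(axis=1, keepdims=True)
        best = np.minimum(np.minimum(prev, shift_m + P1),
                          np.minimum(shift_p + P1, prev_min + P2))
        agg[:, x] = cost[:, x] + best - prev_min   # subtract to keep bounded
    return agg
```

A full SGM sums such aggregations over 4 to 16 path directions before the winner-take-all disparity selection; the context carried along each path is what fills the low-texture and occluded regions that local block matching leaves as gaps.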
Hierarchical Neural Architecture Search for Deep Stereo Matching - Supplementary Materials
KITTI 2012 contains 194 training image pairs and 195 test image pairs; we use a maximum disparity level of 192 on this dataset. Middlebury, in contrast, consists mostly of indoor scenes with handcrafted layouts and contains many thin objects and large disparity ranges. We provide more qualitative results on the SceneFlow, KITTI 2012, KITTI 2015 and Middlebury datasets in Figures 1, 2, 3 and 4, respectively.
- Oceania > Australia (0.05)
- North America > Canada (0.05)
- Oceania > Australia (0.04)
- Asia > China (0.04)
- North America > Canada (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
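Comparisons on these benchmarks rest on two standard disparity metrics: the end-point error (EPE, mean absolute disparity error) and the bad-pixel rate (fraction of pixels whose error exceeds a tolerance, e.g. 3 px). A minimal numpy version follows; the tolerance and the zero-marks-invalid convention are the usual KITTI ones, not taken from this supplement.

```python
import numpy as np

def disparity_errors(pred, gt, valid=None, tau=3.0):
    """End-point error (EPE) and bad-pixel rate for a disparity map.
    A pixel is 'bad' if |pred - gt| > tau (tau = 3 px gives Bad3)."""
    if valid is None:
        valid = gt > 0     # KITTI convention: 0 marks missing ground truth
    err = np.abs(pred - gt)[valid]
    return err.mean(), (err > tau).mean()
```

Both numbers are typically reported over non-occluded pixels and over all valid pixels separately, which is why benchmark tables list two columns per method.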
StereoMamba: Real-time and Robust Intraoperative Stereo Disparity Estimation via Long-range Spatial Dependencies
Wang, Xu, Xu, Jialang, Zhang, Shuai, Huang, Baoru, Stoyanov, Danail, Mazomenos, Evangelos B.
Abstract -- Stereo disparity estimation is crucial for obtaining depth information in robot-assisted minimally invasive surgery (RAMIS). While current deep learning methods have made significant advancements, challenges remain in achieving an optimal balance between accuracy, robustness, and inference speed. To address these challenges, we propose the StereoMamba architecture, which is specifically designed for stereo disparity estimation in RAMIS. Our approach is based on a novel Feature Extraction Mamba (FE-Mamba) module, which enhances long-range spatial dependencies both within and across stereo images. To effectively integrate multi-scale features from FE-Mamba, we then introduce a novel Multidimensional Feature Fusion (MFF) module. Experiments against the state-of-the-art on the ex-vivo SCARED benchmark demonstrate that StereoMamba achieves superior performance with an EPE of 2.64 px and a depth MAE of 2.55 mm, and the second-best performance with a Bad2 of 41.49% and a Bad3 of 26.99%, while maintaining an inference speed of 21.28 FPS for a pair of high-resolution images (1280×1024), striking the optimum balance between accuracy, robustness, and efficiency. Furthermore, by comparing synthesized right images, generated by warping left images using the predicted disparity maps, with the actual right images, StereoMamba achieves the best average SSIM (0.8970) and PSNR (16.0761), exhibiting strong zero-shot generalization on the in-vivo RIS2017 and StereoMIS datasets.
I. INTRODUCTION
Stereo endoscopes are routinely employed in robot-assisted minimally invasive surgery (RAMIS) to visualize the internal anatomy, providing surgeons with depth perception for precise instrument manipulation [1].
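The warping-based evaluation mentioned in the abstract can be sketched in a few lines: sample the left image at column x + d to synthesize the right view, then score the synthesis with PSNR. This is a generic illustration under simplifying assumptions (grayscale images, nearest-neighbour sampling, disparity taken in the right view's coordinates), not the paper's evaluation code.

```python
import numpy as np

def warp_left_to_right(left, disp):
    """Synthesize the right view of a rectified pair by sampling the
    left image at x + d (disp assumed given in right-view coordinates,
    nearest-neighbour sampling, columns clipped at the border)."""
    H, W = left.shape
    xs = np.arange(W) + disp                      # (H, W) source columns
    xs = np.clip(np.round(xs).astype(int), 0, W - 1)
    rows = np.arange(H)[:, None]
    return left[rows, xs]

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```

A disparity map that is accurate everywhere makes the synthesized and real right images agree except near occlusions and the image border, which is why SSIM/PSNR of the warp serves as a proxy for disparity quality when dense ground truth is unavailable.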
Boosting Omnidirectional Stereo Matching with a Pre-trained Depth Foundation Model
Endres, Jannik, Hahn, Oliver, Corbière, Charles, Schaub-Meyer, Simone, Roth, Stefan, Alahi, Alexandre
Omnidirectional depth perception is essential for mobile robotics applications that require scene understanding across a full 360° field of view. Camera-based setups offer a cost-effective option by using stereo depth estimation to generate dense, high-resolution depth maps without relying on expensive active sensing. However, existing omnidirectional stereo matching approaches achieve only limited depth accuracy across diverse environments, depth ranges, and lighting conditions, due to the scarcity of real-world data. We address this by building on a pre-trained depth foundation model: a dedicated two-stage training strategy utilizes its relative monocular depth features for omnidirectional stereo matching before scale-invariant fine-tuning. The resulting method, DFI-OmniStereo, achieves state-of-the-art results on the real-world Helvipad dataset, reducing disparity MAE by approximately 16% compared to the previous best omnidirectional stereo method.
I. INTRODUCTION
Mobile robots are increasingly being deployed across various domains, including agriculture [1], autonomous driving [2], healthcare [3], search and rescue missions [4], and warehouse automation [5]. In these applications, accurate depth perception is crucial for constructing reliable 3D representations of a robot's environment to support essential tasks such as path planning, mapping, and manipulation. Traditionally, LiDAR sensors have been the preferred choice for acquiring depth information due to their high precision and 360° field of view.
- Europe > Switzerland > Vaud > Lausanne (0.04)
- Europe > Germany > Hesse > Darmstadt Region > Darmstadt (0.04)
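Foundation models for monocular depth typically predict depth only up to an unknown scale and shift, which is why methods that reuse their features need a scale-aware fine-tuning or alignment step. A common way to relate such relative predictions to metric ground truth is a least-squares fit of a scale s and shift t; the sketch below shows that generic alignment, not any specific procedure from this paper.

```python
import numpy as np

def align_scale_shift(rel_depth, metric_depth, valid):
    """Least-squares scale s and shift t such that s*rel + t ~ metric,
    the usual way affine-invariant (relative) depth predictions are
    compared against metric ground truth over the valid pixels."""
    x, y = rel_depth[valid], metric_depth[valid]
    A = np.stack([x, np.ones_like(x)], axis=1)   # design matrix [x, 1]
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s, t
```

After alignment, metric errors such as MAE become meaningful for a relative predictor, which is also how scale-invariant losses sidestep the unknown global scale during training.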
- Research Report (0.82)
- Overview (0.68)
- Information Technology (0.48)
- Health & Medicine (0.48)
- Food & Agriculture > Agriculture (0.34)